The hypothesis that perceptual mechanisms could have more
representational and logical power than usually assumed is interesting
and provocative, especially with regard to brain evolution. However,
the importance of embodiment and grounding is exaggerated, and the
implication that there is no highly abstract representation at all,
and that human-like knowledge cannot be learned or represented without
human bodies, is very doubtful. A machine-learning model, Latent
Semantic Analysis (LSA), which closely mimics human word and passage
meaning relations, is offered as a counterexample.
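The core of LSA can be sketched briefly: a term-by-passage count matrix is factored by singular value decomposition, only the largest latent dimensions are kept, and word similarity is measured by cosine in that reduced space. The tiny vocabulary, counts, and the choice of two latent dimensions below are illustrative assumptions, not data from the model under discussion:

```python
import numpy as np

# Toy term-by-passage count matrix; rows are terms, columns are passages.
# All terms and counts here are invented purely for illustration.
terms = ["doctor", "nurse", "hospital", "car", "engine"]
X = np.array([
    [2, 3, 0, 0],  # doctor
    [1, 2, 0, 0],  # nurse
    [2, 2, 0, 1],  # hospital
    [0, 0, 3, 2],  # car
    [0, 0, 2, 3],  # engine
], dtype=float)

# LSA's central step: singular value decomposition, truncated to k dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                          # keep the k largest latent dimensions (assumed)
word_vecs = U[:, :k] * s[:k]   # terms re-represented in the latent space

def cosine(a, b):
    """Cosine similarity between two latent-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

idx = {t: i for i, t in enumerate(terms)}
sim_related = cosine(word_vecs[idx["doctor"]], word_vecs[idx["nurse"]])
sim_unrelated = cosine(word_vecs[idx["doctor"]], word_vecs[idx["engine"]])
print(sim_related, sim_unrelated)
```

Words that occur in similar passages ("doctor", "nurse") end up closer in the latent space than words from unrelated passages ("doctor", "engine"), which is the sense in which LSA mimics human meaning relations without any bodily grounding.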